# Few-shot Adaptation
Cuckoo C4 Super Rainbow
Apache-2.0
Cuckoo is a 300-million-parameter information extraction model that adapts the next-token prediction paradigm of large language models to extraction, and can enhance itself using diverse text resources.
Large Language Model
Transformers

C
KomeijiForce
159
1
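The Cuckoo cards describe extraction framed as next-token prediction. A minimal toy sketch of that idea (illustrative only, not the model's actual implementation; `nte_extract` and `demo_score` are names invented here): constrain each "generated" token to be copied from the input context, so next-token prediction becomes span extraction.

```python
# Toy sketch of the Next Token Extraction (NTE) idea: generation is
# restricted to tokens that appear in the input context (plus an
# end-of-sequence marker), turning prediction into extraction.

def nte_extract(context_tokens, score, max_len=5):
    """Greedily 'generate' an answer using only tokens from the context."""
    allowed = set(context_tokens) | {"<eos>"}
    out = []
    for _ in range(max_len):
        # Pick the allowed token the scorer likes best given the prefix so far.
        best = max(allowed, key=lambda t: score(out, t))
        if best == "<eos>":
            break
        out.append(best)
    return out

def demo_score(prefix, tok):
    """Stand-in scorer (a real model would supply token probabilities)."""
    if not prefix and tok == "KomeijiForce":
        return 2
    if prefix and tok == "<eos>":
        return 1
    return 0

context = "Cuckoo was released by KomeijiForce".split()
print(nte_extract(context, demo_score))  # → ['KomeijiForce']
```

The scorer here is a hard-coded stand-in; in the real model the scores would come from the language-model head over the constrained vocabulary.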
Cuckoo C4 Rainbow
Apache-2.0
Cuckoo is a small (300M-parameter) information extraction (IE) model that frames extraction as next-token prediction, mimicking how large language models generate text.
Knowledge Graph
Transformers

C
KomeijiForce
17
1
Cuckoo C4 Instruct
MIT
Cuckoo is a small-scale information extraction model built on the Next Token Extraction (NTE) paradigm, extracting efficiently by mimicking how large language models predict tokens.
Question Answering System
Transformers

C
KomeijiForce
17
1
Smart Lemon Cookie 7B
Smart-Lemon-Cookie-7B is a merged language model created with mergekit that combines the strengths of several models, suitable for text generation and role-playing tasks.
Large Language Model
Transformers, English

S
FallenMerick
40
4
Blurdus 7b V0.1
Apache-2.0
Blurdus-7b-v0.1 is a merged model obtained by combining three 7B-parameter models with LazyMergekit, showing strong performance across multiple benchmarks.
Large Language Model
Transformers

B
gate369
80
1
Supermario V2
Apache-2.0
supermario-v2 is a merged model based on Mistral-7B-v0.1 that combines three different models using the DARE_TIES method and offers strong text generation capabilities.
Large Language Model
Transformers, English

S
jan-hq
77
8
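The supermario-v2 entry mentions the DARE_TIES merge method. A toy sketch of the DARE half of that recipe (a simplification: it drops and rescales each model's weight delta, then averages; the TIES sign-election step is omitted, and `dare_merge` with plain float lists is an invention for illustration, not mergekit's API):

```python
import random

def dare_merge(base, finetuned_models, drop_p=0.5, seed=0):
    """Toy DARE-style merge over flat lists of float weights:
    randomly drop a fraction drop_p of each delta (finetuned - base),
    rescale the survivors by 1/(1-drop_p), and average onto the base."""
    rng = random.Random(seed)
    merged = list(base)
    for ft in finetuned_models:
        for i, (b, w) in enumerate(zip(base, ft)):
            delta = w - b
            if rng.random() < drop_p:
                delta = 0.0              # dropped parameter delta
            else:
                delta /= (1.0 - drop_p)  # rescale so expectation is preserved
            merged[i] += delta / len(finetuned_models)
    return merged
```

The rescaling keeps the expected merged weights equal to a plain delta average, which is why DARE can drop most deltas without destroying the merge.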
Timesformer Base Finetuned K400 Continual Lora Ucf101 Continual Lora Ucf101
A video action recognition model based on the TimeSformer architecture, pre-trained on the Kinetics-400 dataset and fine-tuned on the UCF101 dataset.
Video Processing
Transformers

T
NiiCole
18
0
Vit Base Patch16 224 In21k Iiii
Apache-2.0
This model is a fine-tuned Vision Transformer based on google/vit-base-patch16-224-in21k, primarily used for image classification tasks.
Image Classification
Transformers

V
Imene
21
0
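The vit-base-patch16-224 naming in the card above encodes the input geometry: 224×224 images split into 16×16 patches. A small sketch of the resulting token count (the helper name `vit_seq_len` is ours, not part of any library):

```python
def vit_seq_len(image_size=224, patch_size=16, cls_token=True):
    """Number of input tokens a ViT sees: one per non-overlapping patch,
    plus an optional [CLS] token used for classification."""
    patches_per_side = image_size // patch_size   # 224 // 16 = 14
    n_patches = patches_per_side ** 2             # 14 * 14 = 196
    return n_patches + (1 if cls_token else 0)

print(vit_seq_len())  # → 197 (196 patches + [CLS])
```

This is why ViT-Base/16 at 224px processes sequences of 197 tokens; doubling the resolution quadruples the patch count.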
Resnet 50 Finetuned Resnet50 0831
Apache-2.0
An image classification model based on Microsoft's ResNet-50, fine-tuned on an image-folder dataset and achieving 97.64% accuracy.
Image Classification
Transformers

R
morganchen1007
27
0
Chinese Roberta L 2 H 128
This is a compact Chinese RoBERTa model pre-trained on CLUECorpusSmall, with 2 layers and a 128-dimensional hidden size (as the L-2, H-128 name indicates), suitable for various Chinese natural language processing tasks.
Large Language Model, Chinese
C
uer
1,141
11